
Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning



Abstract

Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes. To address these problems, we propose a novel deep learning-based framework for interactive segmentation by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine-tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine-tuning. We applied this framework to two applications: 2D segmentation of multiple organs from fetal MR slices, where only two types of these organs were annotated for training; and 3D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only tumor cores in one MR sequence were annotated for training. Experimental results show that 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine-tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
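The abstract describes fine-tuning a CNN on a single test image with a loss that weights pixels by network and interaction-based uncertainty. The sketch below is a simplified illustration of that idea, not the paper's exact formulation: pseudo-labels come from the network's own prediction, user scribbles override them and receive an extra weight `omega`, and unlabeled pixels whose predicted probability is uncertain (between the assumed thresholds `t0` and `t1`) are excluded from the loss. The function name, parameter values, and thresholds are all illustrative assumptions.

```python
import numpy as np

def interaction_weighted_loss(prob, scribble, omega=5.0, t0=0.2, t1=0.8):
    """Interaction- and uncertainty-weighted cross-entropy (illustrative sketch).

    prob     : (H, W) predicted foreground probability for the test image.
    scribble : (H, W) int map; 1 = foreground scribble, 0 = background
               scribble, -1 = no user interaction at that pixel.
    omega    : extra weight on user-scribbled pixels (assumed value).
    t0, t1   : confidence thresholds; unlabeled pixels with
               t0 < prob < t1 are treated as uncertain and get weight 0.
    """
    # Pseudo-label from the network's own prediction...
    label = (prob >= 0.5).astype(float)
    # ...overridden wherever the user scribbled.
    label = np.where(scribble >= 0, scribble.astype(float), label)

    # Per-pixel weights: omega on scribbled pixels, 1 on confidently
    # predicted pixels, 0 on uncertain unlabeled pixels.
    confident = (prob <= t0) | (prob >= t1)
    weight = np.where(scribble >= 0, omega, confident.astype(float))

    # Weighted binary cross-entropy, normalized by total weight.
    eps = 1e-7
    p = np.clip(prob, eps, 1 - eps)
    ce = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    return (weight * ce).sum() / max(weight.sum(), eps)
```

Under this weighting, a prediction that contradicts a scribble is penalized most heavily, so a few gradient steps on the test image pull the segmentation toward the user's corrections while confident unlabeled regions anchor the rest.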
